Deep learning has been widely used for protein engineering, but it is limited by the lack of sufficient experimental data for training an accurate model to predict the functional fitness of high-order mutants. Here, we develop SESNet, a supervised deep-learning model that predicts the fitness of protein mutants by leveraging both sequence and structure information and exploiting an attention mechanism. Our model integrates the local evolutionary context from homologous sequences, the global evolutionary context encoding rich semantics from the universal protein sequence space, and the structure information accounting for the microenvironment around each residue in a protein. We show that SESNet outperforms state-of-the-art models in predicting the sequence-function relationship on 26 deep mutational scanning datasets. More importantly, we propose a data-augmentation strategy that leverages data from unsupervised models to pre-train our model. After pre-training, our model achieves strikingly high accuracy in predicting the fitness of protein mutants, especially for higher-order variants (>4 mutation sites), when fine-tuned on only a small number of experimental mutation data (<50). The proposed strategy is of great practical value, as the required experimental effort, i.e., producing a few tens of experimental mutation measurements for a given protein, is generally affordable for an ordinary biochemistry group and can be applied to almost any protein.
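As a rough illustration of the fusion idea described above, the following minimal PyTorch sketch combines three per-residue feature tracks (MSA-derived local context, protein-language-model embeddings, and structural microenvironment features) with self-attention and pools them into a scalar fitness score. All module names, dimensions, and the fusion scheme are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of sequence/structure fusion with attention;
# dimensions and layer choices are assumptions, not SESNet's actual design.
import torch
import torch.nn as nn

class FitnessFusionNet(nn.Module):
    def __init__(self, d_local=768, d_global=1280, d_struct=64, d_model=256, n_heads=4):
        super().__init__()
        # Project the three per-residue feature tracks into a shared space.
        self.proj_local = nn.Linear(d_local, d_model)    # MSA-derived features
        self.proj_global = nn.Linear(d_global, d_model)  # protein-LM embeddings
        self.proj_struct = nn.Linear(d_struct, d_model)  # residue microenvironment
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(d_model, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, h_local, h_global, h_struct):
        # Each input: (batch, seq_len, d_*) per-residue embeddings.
        h = self.proj_local(h_local) + self.proj_global(h_global) + self.proj_struct(h_struct)
        h, _ = self.attn(h, h, h)          # residues attend to each other
        return self.head(h.mean(dim=1))    # pooled representation -> scalar fitness

# Example: score a batch of 8 variants of a 120-residue protein.
net = FitnessFusionNet()
score = net(torch.randn(8, 120, 768), torch.randn(8, 120, 1280), torch.randn(8, 120, 64))
```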
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
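Patch-based training, the most commonly reported strategy for oversized inputs, amounts to tiling the image into pieces that fit in memory. A minimal sketch, with patch size and stride as illustrative choices:

```python
# Minimal sketch of patch-based training: split an image too large for one
# forward pass into fixed-size tiles. Patch size and stride are arbitrary here.
import numpy as np

def extract_patches(image: np.ndarray, patch: int = 256, stride: int = 256):
    """Yield (row, col, tile) covering a 2D image of shape (H, W[, C])."""
    h, w = image.shape[:2]
    for r in range(0, max(h - patch, 0) + 1, stride):
        for c in range(0, max(w - patch, 0) + 1, stride):
            yield r, c, image[r:r + patch, c:c + patch]

tiles = list(extract_patches(np.zeros((1024, 2048, 3), dtype=np.uint8)))
print(len(tiles))  # 4 rows x 8 cols = 32 patches
```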
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
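Since the checkpoints are publicly released on the Hugging Face Hub, a minimal usage sketch follows; "bigscience/bloom-560m" is one of the smaller released sizes (the full 176B model requires multi-GPU or offloaded inference), and the prompt is arbitrary.

```python
# Minimal usage sketch for the released BLOOM checkpoints via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("The Antarctic ice sheet is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```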
Contemporary methods have shown promising results on cardiac image segmentation, but merely in static learning, i.e., the network is optimized once and for all, ignoring potential needs for model updating. In real-world scenarios, new data continues to be gathered from multiple institutions over time, and new demands keep growing to pursue more satisfying performance. The desired model should incrementally learn from each incoming dataset and progressively update with improved functionality as time goes by. As the datasets sequentially delivered from multiple sites are normally heterogeneous with domain discrepancy, each updated model should not catastrophically forget previously learned domains while generalizing well to currently arrived domains or even unseen domains. In medical scenarios, this is particularly challenging, as accessing or storing past data is commonly not allowed due to data privacy. To this end, we propose a novel domain-incremental learning framework to recover past domain inputs first and then regularly replay them during model optimization. Particularly, we first present a style-oriented replay module to enable structure-realistic and memory-efficient reproduction of past data, and then incorporate the replayed past data to jointly optimize the model with current data to alleviate catastrophic forgetting. During optimization, we additionally perform domain-sensitive feature whitening to suppress the model's dependency on features that are sensitive to domain changes (e.g., domain-distinctive style features), to assist domain-invariant feature exploration and gradually improve the generalization performance of the network. We have extensively evaluated our approach with the M&Ms Dataset in single-domain and compound-domain incremental learning settings, achieving improved performance over other comparison approaches.
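One ingredient, feature whitening, can be sketched generically: penalize the off-diagonal covariance of per-instance feature maps so that style-like, domain-sensitive correlations are suppressed. This is a common generic variant, not necessarily the paper's exact formulation.

```python
# Generic instance-feature whitening penalty; an illustrative stand-in for
# the paper's domain-sensitive feature whitening, not its exact loss.
import torch

def whitening_penalty(feat: torch.Tensor) -> torch.Tensor:
    """feat: (batch, channels, H, W) activations from some layer."""
    b, c, h, w = feat.shape
    x = feat.reshape(b, c, h * w)
    x = x - x.mean(dim=2, keepdim=True)                # center per channel
    cov = x @ x.transpose(1, 2) / (h * w - 1)          # (b, c, c) covariance
    off_diag = cov - torch.diag_embed(torch.diagonal(cov, dim1=1, dim2=2))
    return off_diag.pow(2).mean()                      # drive correlations to 0

loss_whiten = whitening_penalty(torch.randn(4, 64, 32, 32))
```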
Throughout computational science, there is a growing need to leverage the continual improvements in raw computational horsepower to achieve greater physical fidelity through scale-bridging rather than brute-force increases in the number of mesh elements. For instance, quantitative predictions of transport in nanoporous media, critical to the extraction of hydrocarbons from tight shale formations, are impossible without accounting for molecular-level interactions. Similarly, inertial confinement fusion simulations rely on numerical diffusion to mimic molecular effects such as non-local transport and mixing, without truly accounting for molecular interactions. With these two disparate applications in mind, we develop a novel capability that uses an active-learning approach to optimize the use of local fine-scale simulations for informing coarse-scale hydrodynamics. Our approach addresses three challenges: forecasting the continuum coarse-scale trajectory in order to speculatively execute new fine-scale molecular dynamics calculations, dynamically updating the coarse scale from the fine-scale calculations, and quantifying the uncertainty in the neural-network models.
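A schematic of how such a loop might fit together, assuming an ensemble surrogate whose disagreement serves as the uncertainty signal; `run_md` and `retrain` are hypothetical stand-ins for the expensive fine-scale solver and the training routine, and the control flow is a speculative reconstruction rather than the paper's algorithm:

```python
# Schematic active-learning loop: when the ensemble disagrees on the closure
# at a forecast state, launch a fine-scale MD run and retrain the surrogate.
import numpy as np

def ensemble_predict(models, state):
    preds = np.stack([m(state) for m in models])
    return preds.mean(axis=0), preds.std(axis=0)  # mean closure, uncertainty

def step(models, state, run_md, retrain, threshold=0.1):
    closure, sigma = ensemble_predict(models, state)
    if sigma.max() > threshold:                # surrogate is unsure here
        sample = run_md(state)                 # expensive fine-scale MD call
        models = retrain(models, state, sample)
        closure, _ = ensemble_predict(models, state)
    return models, closure                     # closure feeds the coarse solver
```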
IceCube, a cubic-kilometer array of optical sensors deployed 1.45 km to 2.45 km below the surface of the Antarctic ice sheet, is designed to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analysis. Reconstructing and classifying events is challenging due to the detector geometry, the inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, IceCube events can be represented as point-cloud graphs, with graph neural networks (GNNs) as the classification and reconstruction method. GNNs are able to distinguish neutrino events from cosmic-ray backgrounds, classify different neutrino event types, and reconstruct the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range to the current state-of-the-art maximum-likelihood techniques used in current IceCube analyses, including the impact of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false positive rate (FPR), compared to current IceCube methods. Alternatively, the GNN reduces the FPR by over a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves on average by 13%-20% compared to current maximum-likelihood techniques. When running on a GPU, the GNN is capable of processing IceCube events at a rate close to the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
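The point-cloud-graph representation can be illustrated with a toy sketch: sensor hits become nodes carrying (x, y, z, time, charge), edges connect nearest neighbors, and a small message-passing block pools to an event-level prediction. Architecture details here are illustrative assumptions, not the analysis's actual GNN.

```python
# Toy point-cloud GNN: kNN graph over hits, one edge-convolution-style block,
# global pooling to an event-level score. Illustrative only.
import torch
import torch.nn as nn

class EdgeBlock(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * d_in, d_out), nn.ReLU())

    def forward(self, x, nbr_idx):              # x: (N, d), nbr_idx: (N, k)
        nbrs = x[nbr_idx]                       # (N, k, d) neighbor features
        center = x.unsqueeze(1).expand_as(nbrs)
        msgs = self.mlp(torch.cat([center, nbrs - center], dim=-1))
        return msgs.max(dim=1).values           # aggregate over neighbors

def knn_index(pos, k=8):                        # pos: (N, 3) sensor positions
    d = torch.cdist(pos, pos)
    return d.topk(k + 1, largest=False).indices[:, 1:]  # drop self-edge

hits = torch.randn(200, 5)                      # x, y, z, time, charge
idx = knn_index(hits[:, :3])
h = EdgeBlock(5, 64)(hits, idx)
event_logit = nn.Linear(64, 1)(h.mean(dim=0))   # pooled event-level score
```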
Deep-learning-based methods for retinal lesion segmentation usually require large amounts of precisely pixel-annotated data. However, coarse annotations, such as circles or ellipses outlining lesion regions, can be six times more efficient than pixel-level annotation. Therefore, this paper proposes an annotation refinement network that converts coarse annotations into pixel-level segmentation masks. Our main novelty is the application of the prototype learning paradigm to enhance generalization across different datasets or types of lesions. We also introduce a prototype weighing module to handle the challenging cases of overly small lesions. The proposed method is trained on the publicly available IDRiD dataset and then generalized to the public DDR dataset as well as our real-world private dataset. Experiments show that our method substantially improves the initial coarse masks and outperforms non-prototypical baselines by a large margin. Furthermore, we demonstrate the usefulness of the prototype weighing module in both cross-dataset and cross-class settings.
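A hedged sketch of the prototype idea: compute a lesion prototype from the features under the coarse mask, then relabel pixels by similarity to that prototype. The prototype weighing for small lesions is omitted, and all names and the threshold are illustrative.

```python
# Illustrative prototype-based refinement of a coarse mask, not the paper's
# actual network: masked-mean prototype + cosine-similarity relabeling.
import torch
import torch.nn.functional as F

def refine_with_prototype(feat, coarse_mask, tau=0.8):
    """feat: (C, H, W) pixel embeddings; coarse_mask: (H, W) in {0, 1}."""
    c = feat.flatten(1)                                # (C, H*W)
    m = coarse_mask.flatten().float()
    proto = (c * m).sum(dim=1) / m.sum().clamp(min=1)  # masked mean feature
    sim = F.cosine_similarity(c, proto.unsqueeze(1), dim=0)
    return (sim > tau).reshape(coarse_mask.shape)      # refined binary mask

mask = refine_with_prototype(torch.randn(32, 64, 64), torch.randint(0, 2, (64, 64)))
```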
Nowadays, it has become increasingly common to collect observations of feature-response pairs from different environments. As a consequence, a learned predictor must often be applied to data whose distribution differs from the training distribution due to distribution shift. One principled approach is to adopt structural causal models to describe the training and test models, following the invariance principle, which states that the conditional distribution of the response given its predictors remains invariant across environments. However, this principle may be violated in practical situations where the response is intervened upon. A natural question is whether other forms of invariance can still be identified to facilitate prediction in unseen environments. To shed light on this challenging scenario, we introduce the invariant matching property (IMP), an explicit relation that captures interventions through an additional feature. This leads to an alternative form of invariance that enables a unified treatment of general interventions on the response. We analyze the asymptotic generalization error of our method under both discrete and continuous environment settings, where the continuous case is handled by relating it to semiparametric varying-coefficient models. Our proposed algorithm demonstrates competitive performance compared to existing methods across various experimental settings.
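The invariance principle referenced above can be stated formally; the following display is the standard formulation (the IMP itself generalizes this via an additional matched feature, whose exact form is given in the paper):

```latex
% Invariance principle: the conditional law of the response Y given the
% predictor subset X_S is the same in every environment e.
P^{e}\!\left(Y \mid X_{S}\right) = P^{e'}\!\left(Y \mid X_{S}\right)
\quad \text{for all environments } e, e'.
```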
To enable early medical intervention for infants with cerebral palsy (CP), the early diagnosis of brain damage is critical. Although the General Movement Assessment (GMA) has shown promising results in early CP detection, it is laborious. Most existing works take videos as input to classify fidgety movements (FMs) for automating the GMA. These methods require fully observed videos and cannot localize the video frames that contain normal fidgety movements. Therefore, we propose a novel method named WO-GMA to perform FMs localization in a weakly supervised online setting. Infant body keypoints are first extracted as the input of WO-GMA. WO-GMA then performs local spatiotemporal extraction, followed by two network branches that generate pseudo clip-level labels and perform online action modeling. With the clip-level pseudo labels, the action modeling branch learns to detect FMs in an online manner. Experimental results on a dataset with videos of 757 different infants show that WO-GMA achieves state-of-the-art video-level classification and clip-level detection results. Moreover, only the first 20% of the video duration is needed to obtain classification results comparable to those from fully observed videos, meaning that the FMs diagnosis time can be greatly shortened. Code is available at: https://github.com/scofiedluo/wo-gma.
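The online setting can be illustrated with a toy causal model that scores fidgety movements clip by clip from pose keypoints, so predictions are available before the full video is observed. Layer sizes, clip granularity, and the keypoint count are assumptions, not WO-GMA's architecture.

```python
# Toy online FMs scorer: a causal GRU over streamed keypoints emits a
# per-timestep score without seeing the whole video. Illustrative only.
import torch
import torch.nn as nn

class OnlineFMScorer(nn.Module):
    def __init__(self, n_keypoints=17, d_hidden=128):
        super().__init__()
        self.gru = nn.GRU(n_keypoints * 2, d_hidden, batch_first=True)
        self.head = nn.Linear(d_hidden, 1)

    def forward(self, kpts):                    # kpts: (B, T, K, 2) over time
        b, t, k, _ = kpts.shape
        h, _ = self.gru(kpts.reshape(b, t, k * 2))
        return torch.sigmoid(self.head(h))      # (B, T, 1) per-clip FMs score

scores = OnlineFMScorer()(torch.randn(2, 300, 17, 2))  # streamed keypoints
```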
Open World Object Detection (OWOD) is a challenging computer vision problem that requires detecting unknown objects and gradually learning the identified unknown classes. However, it cannot distinguish unknown instances as multiple unknown classes. In this work, we propose a novel OWOD problem called Unknown-Classified Open World Object Detection (UC-OWOD). UC-OWOD aims to detect unknown instances and classify them into different unknown classes. Besides, we formulate the problem and devise a two-stage object detector to solve UC-OWOD. First, unknown label-aware proposals and an unknown-discriminative classification head are used to detect known and unknown objects. Then, a similarity-based unknown classification and an unknown clustering refinement module are constructed to distinguish multiple unknown classes. Moreover, two novel evaluation protocols are designed to evaluate unknown-class detection. Abundant experiments and visualizations demonstrate the effectiveness of the proposed method. Code is available at https://github.com/johnwuzh/uc-owod.
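The intuition behind the second stage can be sketched as follows: embeddings of detections flagged as "unknown" are grouped into multiple unknown classes by similarity-based clustering. Here plain k-means stands in for the paper's clustering-refinement module, and the number of unknown classes is an assumed hyperparameter.

```python
# Sketch of splitting unknown detections into pseudo unknown classes by
# clustering their RoI features; a stand-in for the paper's module.
import numpy as np
from sklearn.cluster import KMeans

def split_unknowns(unknown_embeddings: np.ndarray, n_unknown_classes: int = 3):
    """unknown_embeddings: (num_detections, feat_dim) RoI features."""
    labels = KMeans(n_clusters=n_unknown_classes, n_init=10).fit_predict(
        unknown_embeddings)
    return labels                                # pseudo unknown-class ids

ids = split_unknowns(np.random.randn(50, 128))
```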